These models were run on all taxa at once to evaluate the relationship between summed population change and body size, trophic level, and lifespan, while accounting for differences in System and Class, and including a random effect of Binomial because the trait data were collected at the species level. When modelling summed lambdas, each model also includes time series length, to account for differences in summed lambdas between shorter and longer time series.
The variables included in the best models are discussed in each section below.
Summed lambdas represent the total change a population has undergone over the period considered: negative values mean the population has had a net decline, and positive values mean it has had a net increase.
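As a quick arithmetic sketch of this (assuming, as in Living Planet–style analyses, that annual lambdas are log10 ratios of consecutive abundance values; the series here is invented purely for illustration):

```r
# invented abundance time series for one population
N <- c(100, 80, 70, 90, 60)

# annual lambdas: log10 ratio of consecutive abundance values
lambdas <- log10(N[-1] / N[-length(N)])

# summed lambda = total net change over the period;
# the log ratios telescope, so this equals log10(last/first)
sum(lambdas)   # log10(60/100), about -0.22: a net decline
```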
These are models to evaluate the relationship between summed population change and body size, trophic level, and lifespan, while accounting for differences in System and Class, and including a random effect of Binomial because the trait data were collected at the species level. Each model includes time series length to account for differences in summed lambdas in shorter vs. longer time series.
# load lme4 for the mixed models below
library(lme4)

# null model: random intercept by species, plus time series length
m0 = lmer(sumlambda ~ (1|Binomial) + tslength, data = df)
# taxonomic class and/or system only, no traits
m1 = lmer(sumlambda ~ Class + (1|Binomial) + tslength, data = df)
m2 = lmer(sumlambda ~ Class + System + (1|Binomial) + tslength, data = df)
m3 = lmer(sumlambda ~ System + (1|Binomial) + tslength, data = df)
# adding traits
# body size only
# class should be in each model because body size is normalised within classes
m4 = lmer(sumlambda ~ log10(BodySize) + Class + (1|Binomial) + tslength, data = df)
m5 = lmer(sumlambda ~ log10(BodySize) + Class + System + (1|Binomial) + tslength, data = df)
m6 = lmer(sumlambda ~ Class*log10(BodySize) + (1|Binomial) + tslength, data = df)
m7 = lmer(sumlambda ~ System*log10(BodySize) + (1|Binomial) + tslength, data = df)
# trophic level only
m8 = lmer(sumlambda ~ TrophicLevel + (1|Binomial) + tslength, data = df)
m9 = lmer(sumlambda ~ TrophicLevel + Class + (1|Binomial) + tslength, data = df)
m10 = lmer(sumlambda ~ TrophicLevel + System + (1|Binomial) + tslength, data = df)
m11 = lmer(sumlambda ~ Class*TrophicLevel + (1|Binomial) + tslength, data = df)
m12 = lmer(sumlambda ~ System*TrophicLevel + (1|Binomial) + tslength, data = df)
# lifespan only
m13 = lmer(sumlambda ~ LifeSpan + (1|Binomial) + tslength, data = df)
m14 = lmer(sumlambda ~ LifeSpan + Class + (1|Binomial) + tslength, data = df)
m15 = lmer(sumlambda ~ LifeSpan + System + (1|Binomial) + tslength, data = df)
m16 = lmer(sumlambda ~ Class*LifeSpan + (1|Binomial) + tslength, data = df)
m17 = lmer(sumlambda ~ System*LifeSpan + (1|Binomial) + tslength, data = df)
# all three traits
m18 = lmer(sumlambda ~ log10(BodySize) + TrophicLevel + Class + (1|Binomial) + tslength, data = df)
m19 = lmer(sumlambda ~ log10(BodySize) + LifeSpan + Class + (1|Binomial) + tslength, data = df)
m20 = lmer(sumlambda ~ TrophicLevel + LifeSpan + (1|Binomial) + tslength, data = df)
m21 = lmer(sumlambda ~ log10(BodySize) + TrophicLevel + LifeSpan + Class + (1|Binomial) + tslength, data = df)
We compared models according to a suite of performance metrics, including:

- conditional and marginal R2 (R2_conditional, R2_marginal)
- the intraclass correlation coefficient (ICC)
- root mean squared error (RMSE)
- residual standard deviation (Sigma)
- AIC and BIC weights (AIC_wt, BIC_wt)
These performance metrics are then combined into a performance score, which can be used to rank the models. The performance score is based on normalizing all indices (i.e. rescaling them to a range from 0 to 1) and taking the mean value of all indices for each model. This is mostly helpful as an exploratory metric, and is not necessarily enough to base interpretation on.
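The columns in the table below match the output of `compare_performance()` from the performance package, so the ranking was presumably produced with something like (model names as fitted above):

```r
library(performance)

# rank = TRUE rescales each index to 0-1 across models and
# averages them into the Performance_Score column
compare_performance(m0, m1, m2, m16, rank = TRUE)
```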
We can look at the metrics for the top 6 models below:
| Name | Model | R2_conditional | R2_marginal | ICC | RMSE | Sigma | AIC_wt | BIC_wt | Performance_Score |
|---|---|---|---|---|---|---|---|---|---|
| m16 | lmerMod | 0.0777050 | 0.0138256 | 0.0647750 | 0.7035225 | 0.7186127 | 0.00e+00 | 0 | 0.6342115 |
| m6 | lmerMod | 0.0777289 | 0.0187221 | 0.0601326 | 0.7041076 | 0.7184480 | 2.40e-05 | 0 | 0.5416461 |
| m5 | lmerMod | 0.0742245 | 0.0117707 | 0.0631977 | 0.7045316 | 0.7191726 | 1.50e-06 | 0 | 0.4252937 |
| m7 | lmerMod | 0.0737833 | 0.0113476 | 0.0631524 | 0.7046896 | 0.7191119 | 2.02e-05 | 0 | 0.4096807 |
| m2 | lmerMod | 0.0737357 | 0.0118943 | 0.0625858 | 0.7047351 | 0.7191884 | 7.31e-05 | 0 | 0.3911851 |
| m11 | lmerMod | 0.0741484 | 0.0100551 | 0.0647444 | 0.7047304 | 0.7200396 | 0.00e+00 | 0 | 0.3732428 |
I was curious about which variables were included in the top models. Below are two plots, where the models on the x-axis are sorted from best to worst in terms of their performance score.
The first plot shows which variables are in each model. Time series length (tslength) and Binomial were included in all the models. From this plot, we can see that Class is in almost all of the top 6 models, while LifeSpan is in only one of the 6. BodySize, on the other hand, is in 3 of the top 6 models, as is System. TrophicLevel appears only once (in m11, the last of the top 6), and is therefore unlikely to be an important variable here.
The second plot shows the scores for each metric we can use to compare the models. The performance score, which summarises the scores of all metrics, was used to sort the models. RMSE is broadly consistent with this performance score, but we would get different “top models” if we ranked by AIC_wt or BIC_wt instead. In the case of AIC_wt, the 2nd best model (m15) would also include LifeSpan. Something to discuss!
The above figure only showed the scores ranked within each metric, but it is also helpful to look at how quantitatively different these metrics are. For example, even in the top models, the explanatory power is always quite low (R2_conditional and R2_marginal). Generally, there are only very minor differences between the models (if we look at the y-axes).
Model 16 was the best model in terms of performance score, RMSE, AIC and BIC:
sumlambda ~ Class*LifeSpan + (1|Binomial) + tslength
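To dig further into this model, the standard lme4 accessors apply (a sketch using the m16 object fitted above):

```r
summary(m16)                   # fixed effects, including the Class:LifeSpan interaction
confint(m16, method = "Wald")  # quick Wald confidence intervals
VarCorr(m16)                   # variance of the Binomial random intercept
```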
(Effect plots for Class, LifeSpan, and tslength.)
Model 6 was second best overall, had the second lowest RMSE (root mean squared error), and had the lowest sigma (residual standard deviation):
sumlambda ~ Class*log10(BodySize) + (1|Binomial) + tslength
(Effect plots for Class, BodySize, and tslength.)
Average lambdas are the average of the annual rates of change in population abundance for the time period considered.
These are models to evaluate the relationship between average population change and body size, trophic level, and lifespan, while accounting for differences in System and Class, and including a random effect of Binomial because the trait data were collected at the species level.
# null model: random intercept by species only
m0 = lmer(avlambda ~ (1|Binomial), data = df)
# taxonomic class and/or system only, no traits
m1 = lmer(avlambda ~ Class + (1|Binomial), data = df)
m2 = lmer(avlambda ~ Class + System + (1|Binomial), data = df)
m3 = lmer(avlambda ~ System + (1|Binomial), data = df)
# adding traits
# body size only
# class should be in each model because body size is normalised within classes
m4 = lmer(avlambda ~ log10(BodySize) + Class + (1|Binomial), data = df)
m5 = lmer(avlambda ~ log10(BodySize) + Class + System + (1|Binomial), data = df)
m6 = lmer(avlambda ~ Class*log10(BodySize) + (1|Binomial), data = df)
m7 = lmer(avlambda ~ System*log10(BodySize) + (1|Binomial), data = df)
# trophic level only
m8 = lmer(avlambda ~ TrophicLevel + (1|Binomial), data = df)
m9 = lmer(avlambda ~ TrophicLevel + Class + (1|Binomial), data = df)
m10 = lmer(avlambda ~ TrophicLevel + System + (1|Binomial), data = df)
m11 = lmer(avlambda ~ Class*TrophicLevel + (1|Binomial), data = df)
m12 = lmer(avlambda ~ System*TrophicLevel + (1|Binomial), data = df)
# lifespan only
m13 = lmer(avlambda ~ LifeSpan + (1|Binomial), data = df)
m14 = lmer(avlambda ~ LifeSpan + Class + (1|Binomial), data = df)
m15 = lmer(avlambda ~ LifeSpan + System + (1|Binomial), data = df)
m16 = lmer(avlambda ~ Class*LifeSpan + (1|Binomial), data = df)
m17 = lmer(avlambda ~ System*LifeSpan + (1|Binomial), data = df)
# all three traits
m18 = lmer(avlambda ~ log10(BodySize) + TrophicLevel + Class + (1|Binomial), data = df)
m19 = lmer(avlambda ~ log10(BodySize) + LifeSpan + Class + (1|Binomial), data = df)
m20 = lmer(avlambda ~ TrophicLevel + LifeSpan + (1|Binomial), data = df)
m21 = lmer(avlambda ~ log10(BodySize) + TrophicLevel + LifeSpan + Class + (1|Binomial), data = df)
We compared models according to a suite of performance metrics, including:

- conditional and marginal R2 (R2_conditional, R2_marginal)
- the intraclass correlation coefficient (ICC)
- root mean squared error (RMSE)
- residual standard deviation (Sigma)
- AIC and BIC weights (AIC_wt, BIC_wt)
These performance metrics are then combined into a performance score, which can be used to rank the models. The performance score is based on normalizing all indices (i.e. rescaling them to a range from 0 to 1) and taking the mean value of all indices for each model. This is mostly helpful as an exploratory metric, and is not necessarily enough to base interpretation on.
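As a minimal sketch of the scoring logic itself (the numbers are lifted from the table below purely for illustration; metrics where lower is better are inverted before averaging):

```r
# rescale a vector to the 0-1 range
rescale01 <- function(x) (x - min(x)) / (max(x) - min(x))

r2_marginal <- c(0.0094, 0.0000, 0.0063)  # higher is better
rmse        <- c(0.1428, 0.1434, 0.1431)  # lower is better, so invert

# performance score = mean of the rescaled indices per model
rowMeans(cbind(rescale01(r2_marginal), 1 - rescale01(rmse)))
```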
We can look at the metrics for the top 6 models below:
| Name | Model | R2_conditional | R2_marginal | ICC | RMSE | Sigma | AIC_wt | BIC_wt | Performance_Score |
|---|---|---|---|---|---|---|---|---|---|
| m12 | lmerMod | 0.0413731 | 0.0094439 | 0.0322336 | 0.1427925 | 0.1446303 | 0.0000001 | 0.0000000 | 0.6748597 |
| m0 | lmerMod | 0.0325788 | 0.0000000 | 0.0325788 | 0.1433753 | 0.1450729 | 0.9994594 | 0.9999840 | 0.5164743 |
| m16 | lmerMod | 0.0385925 | 0.0063139 | 0.0324836 | 0.1430507 | 0.1449479 | 0.0000000 | 0.0000000 | 0.4856007 |
| m17 | lmerMod | 0.0365969 | 0.0033761 | 0.0333334 | 0.1431466 | 0.1449834 | 0.0000000 | 0.0000000 | 0.4055570 |
| m10 | lmerMod | 0.0354866 | 0.0010076 | 0.0345137 | 0.1431630 | 0.1450241 | 0.0000000 | 0.0000000 | 0.3635092 |
| m3 | lmerMod | 0.0349035 | 0.0007474 | 0.0341816 | 0.1431983 | 0.1450023 | 0.0001195 | 0.0000003 | 0.3448089 |
Below are two plots, as in the previous section, where the models on the x-axis are sorted from best to worst in terms of their performance score.
The first plot shows which variables are in each model. Binomial is included in all the models. From this plot, we can see that TrophicLevel is in the top model along with System, but occurs only one other time in the top 6. System is in 4 of the top 6 models. Our null model is the second best according to the performance score. The combination selected as best when predicting summed lambdas (Class and LifeSpan) appears in the next best model after the null model. BodySize does not show up in any of the top models, and is therefore unlikely to be an important variable here.
The second plot shows the scores for each metric we can use to compare the models. The performance score, which summarises the scores of all metrics, was used to sort the models. RMSE and Sigma are broadly consistent with this performance score, but we would get different “top models” if we ranked by AIC_wt or BIC_wt, both of which put nearly all their weight on the null model (m0, as in the table above). Among the trait models, AIC_wt favours m7 (which includes BodySize) and BIC_wt favours m8 (which includes TrophicLevel). Something to discuss!
The above figure only showed the scores ranked within each metric, but it is also helpful to look at how quantitatively different these metrics are. For example, even in the top models, the explanatory power of the models is once again quite low (R2_conditional and R2_marginal). Generally, there are once again only very minor differences between the models (if we look at the y-axes).
Let’s look at the top models!
Model 12 was the best model in terms of performance score, explanatory power (R2), ICC, RMSE, Sigma, and BIC.
avlambda ~ System*TrophicLevel + (1|Binomial)
## Error: Confidence intervals could not be computed.
## * Reason: "non-conformable arguments"
## * Source: mm %*% vcm
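One hedge when a package's confidence-interval routine fails like this is to compute intervals directly with lme4 (a sketch using the m12 object fitted above; this works around, rather than diagnoses, the non-conformable-arguments error):

```r
# Wald intervals from lme4 itself, based on the fixed-effect
# variance-covariance matrix
confint(m12, method = "Wald")
```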
(Effect plots for System and TrophicLevel.)
Model 0 was the 2nd best model in terms of performance score. However, it is the null model, with no fixed effects, so there is little to interpret…
avlambda ~ (1|Binomial)
Model 16 was the 3rd best model in terms of performance score, and AIC_wt. This model was selected as the best model based on performance score when predicting summed lambdas in the previous section.
avlambda ~ Class*LifeSpan + (1|Binomial)
(Effect plots for Class and LifeSpan.)